Discuss critically the role of the parietal cortex in
spatial localization, eye movements and reaching.

Greg Detre

Thursday, 18 October, 2001

Dr J Iles, St Hughes, week 2

 

The parietal cortex has traditionally been labelled an 'association' area, since it seems to operate at a stage intermediate between sensory and motor processing, involving and integrating both types of signal. It seems to play a vital role in the visuomotor stream (following Goodale & Haffenden's (1998) terminology) used for directing action, often at a subconscious level. This role seems to involve the representation of, and transformation between, various coordinate frames of reference, in what Andersen et al (1997) term a 'multimodal representation of space'. More controversially, the parietal cortex also seems to contain neuronal activity relating to attention, intention and decision.

The major areas of the parietal cortex that are usually considered are: areas 7a and 7b, the lateral intraparietal area (LIP), the medial superior temporal area (MST, further subdivided into dorsal (MSTd) and lateral (MSTl) areas) and the ventral intraparietal area (VIP). All of these areas are strongly interconnected via cortico-cortical projections.

Broadly, LIP has been widely implicated in saccadic eye movements, with strong direct projections from extrastriate visual areas and projections to prefrontal cortex, the caudate nucleus and the superior colliculus, all of which are areas concerned with saccades. MST seems highly involved in motion processing, with large-receptive-field neurons selective for expansion/contraction, rotation and spiralling (and sometimes more than one of these at a time). Area 7a has large bilateral fields, with strong cortical connections to other visual areas, as well as to the parahippocampal gyrus and cingulate cortex. Area 7b and VIP are closely tied in with the somatosensory system and, to a lesser extent, vision.

Skilled forelimb movements appear to have originated early in tetrapod evolution, possibly as early as the divergence between amphibians and amniotes (Iwaniuk & Whishaw (2000), On the origin of skilled forelimb movements).

The primary motor cortex is no longer seen as a single somatotopic motor representation, but rather as a multitude of overlapping representations, allowing the cortex to organise combinations of movements for specific tasks. Movement-related neurons in the premotor areas may fire during movements in some tasks and not others, encoding a more global feature; set-related neurons, for example, are active in the absence of any overt behaviour, e.g. during a delay between task instructions and execution. The presupplementary motor area is active during the learning of a behaviour, but becomes less active as learning progresses, its activity eventually ceasing when the behaviour becomes automatic. Thus, the hierarchy of motor control gives rise to a hierarchy of task features. Parts of the parietal cortex, together with the motor areas, are heavily involved in the planning and execution of voluntary movements, producing motor programs from the coordinate frames in which the external environment is represented. Efference copies of motor commands, probably generated in the frontal lobes, provide the posterior parietal cortex with information about body movements, coded in different coordinate frames, which may be employed, for instance, in subtracting eye movements from visual signals to calculate the direction of heading (Andersen et al, 1997; see below).

 

Goodale & Haffenden (1998) argue that vision for perception and vision for action are mediated by separate neural mechanisms. They employ a wide range of psychological and psychophysical evidence, as well as neuropsychological dissociations, to show that 'what we think we see is not always what guides our actions'.

They cite the case of DF, a young woman who developed a profound visual form agnosia following carbon monoxide-induced anoxia. Even though DF's low-level visual abilities remained reasonably intact, she can no longer recognise objects on the basis of their form, or the faces of friends and relatives, nor identify even simple geometric shapes, though she remains perfectly able to recognise people by their voices and objects by touch. However, DF's hand and finger movements seem almost unimpaired, even when she is picking up objects she cannot identify or recognise: she rotates her hand and wrist quite normally, and her hand opens to the right aperture for the object being grasped. It appears as though DF's visual system is no longer able to deliver perceptual information about the size, shape and orientation of objects, yet the visuomotor systems that control the programming and execution of visually-guided actions remain sensitive to these same object features.

Moreover, there is evidence that patients with damage to other visual areas of the cerebral cortex, e.g. the superior regions of the posterior parietal cortex (PPC), show the opposite pattern: they cannot use visual information to rotate their hand or scale the distance between their fingers when reaching, though they are perfectly able to describe the size or orientation of objects in that part of the visual field.

Goodale and Milner (1992) have proposed that these two functions correlate with the 'dorsal' and 'ventral' streams identified in the cerebral cortex of the monkey. The dorsal stream runs from the primary visual cortex to the posterior parietal cortex, and seems tied in with visuomotor functions.

 

Andersen et al (1997) argue for a wide range of highly important functions in the parietal cortex. Foremost among these is an abstract, multimodal, distributed representation of space, combining vision, somatosensation, audition and vestibular sensation, which 'can then be used to construct multiple frames of reference to be used by motor structures to code appropriate movements', as well as selecting stimuli and helping to plan movements. This fits in with Goodale & Haffenden's description of the functions of the visuomotor system, and aligns with it in neuroanatomical terms too.

Andersen claims that areas 7a and LIP use their eye position and retinal input signals to represent the location of a visual target with respect to the head, a 'head-centred reference frame'. He concedes that 'intuitively one would imagine that an area representing space in a head-centred reference frame would have receptive fields that are anchored in space with respect to the head', but proposes instead that a highly distributed pattern of activity across a population of cells with different eye position and retinal position sensitivities uniquely specifies each head-centred location. Indeed, he argues that 'when neural networks are trained to transform retinal signals into head-centred coordinates by using eye position signals, the middle-layer units that make the transformation develop gain fields similar to the cells in the parietal cortex (Zipser & Andersen, 1988)'.
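The gain-field idea can be made concrete with a standard formalisation from the modelling literature (a common simplification, not Andersen's own equations): each unit's response is a retinotopic tuning curve multiplicatively scaled by an approximately planar function of eye position,

```latex
% Response of unit i to a stimulus at retinal position x_r with the eyes
% at position e. Gaussian tuning and a planar gain are modelling
% conveniences, not measured parameters.
r_i(\mathbf{x}_r, \mathbf{e}) =
  \exp\!\left(-\frac{\lVert \mathbf{x}_r - \mathbf{c}_i \rVert^{2}}{2\sigma_i^{2}}\right)
  \bigl(a_i + \mathbf{b}_i^{\top}\mathbf{e}\bigr),
\qquad
\mathbf{x}_h = \mathbf{x}_r + \mathbf{e}
```

Because different units have different preferred retinal positions and gain slopes, each (retinal position, eye position) pair, and hence each head-centred location, corresponds to a distinct pattern of activity across the population.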

These 'gain fields' underlie Andersen's entire computational model of the parietal cortex. The idea is that one component of a (local) frame of reference is stripped away or subtracted, e.g. eye position, to give a higher-order frame of reference (e.g. head-centred), by shifting the receptive fields of the cells so that the inputs the cells receive are as they would have been had that dimension been kept static. By using the gain field mechanism, a variety of modalities in different coordinate frames can be integrated into a distributed representation of space. In this way, information is not collapsed and lost: for instance, if the gain field mechanism is used to produce a head-centred frame from retinal position and eye position, the eye-centred coordinates could still be read out by another structure, since the two components have not been conflated; it is almost like shifting all the information in a spreadsheet one column along. Lesions to the posterior parietal cortex give rise to spatial deficits in multiple coordinate frames. This could be because many coordinate frames might conceivably be represented in the same population of neurons; or it could simply be that the different coordinate frames exist in close proximity to one another, and so would all be affected at the same time.
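A toy simulation makes the 'nothing is lost' point explicit. Every parameter below (number of units, tuning widths, gain slopes) is invented for illustration; the point is only that two stimuli sharing a head-centred location still evoke distinct population patterns, so a downstream readout could recover either coordinate frame:

```python
import numpy as np

# Toy gain-field population: all parameters are invented for illustration,
# not taken from recorded parietal neurons.
rng = np.random.default_rng(0)
n_units = 200
centres = rng.uniform(-40, 40, n_units)     # preferred retinal positions (deg)
slopes = rng.uniform(-0.02, 0.02, n_units)  # planar eye-position gain slopes
sigma = 10.0                                # retinal tuning width (deg)

def population_response(x_ret, eye):
    """Gaussian retinal tuning multiplicatively scaled by eye position."""
    tuning = np.exp(-(x_ret - centres) ** 2 / (2 * sigma ** 2))
    gain = 1.0 + slopes * eye
    return tuning * gain

# Two stimuli with the same head-centred location (x_ret + eye = 10 deg)
# evoke different population patterns: the eye-position component has not
# been collapsed away, so either coordinate frame remains decodable.
r1 = population_response(x_ret=30.0, eye=-20.0)
r2 = population_response(x_ret=-10.0, eye=20.0)
print(np.corrcoef(r1, r2)[0, 1])  # well below 1: the patterns are distinct
```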

The idea of gain fields is best understood with reference to their explanation of how MST might be able to calculate the direction of heading from visual signals, or 'computing the direction of self-motion in the world based on the changing retinal image'. Finding one's heading based on visual information is relatively easy if one is facing in the right direction, since it simply involves taking the centre of the expanding visual motion generated by self-motion as the direction of heading (Gibson, 1950). In order to recover the direction of heading even when we are fixating or tracking an object that is not directly ahead of us, though, we have to decompose the resulting optic flow field into (a) the movement of the observer (an expanding field) and (b) eye rotation (a linearly moving field). Andersen relies on Royden et al's (1992) suggestion that an efference copy of the pursuit command may be crucial here. Handily, MSTd contains cells selective for one or more of the following: expansion/contraction, rotation and linear motion (Saito, 1986). However, it appears that MSTd is not decomposing the optic flow into channels of expansion, rotation and linear motion: Andersen produced a spiral space with expansion on one axis and rotation on another, and found that disappointingly few of the MSTd neurons had tuning curves aligned directly along these axes. Interestingly, though, the MSTd neurons displayed a high degree of position and size invariance, as well as form/cue invariance. The MSTd cells seem to convey 'the abstract quality of a pattern of motion, e.g. rotation', which may be important in analysing optic flow by gathering information from any part of the visual field. MST may use cells sensitive to motion pattern in combination with the pursuit eye movement signal it receives to code the direction of heading. They found that many MSTd neurons shift their receptive fields during pursuit eye movements so as to code the direction of heading more faithfully than the focus of expansion on the retina. For instance, when viewing an expanding pattern while making a pursuit movement towards, say, the left, the retinal position of the focus shifts left, which many expansion-selective MSTd neurons compensate for by shifting their receptive fields to the left (and often vice versa for rightward movements). In Andersen's words:

When the eyes move, the focus tuning curve of these cells shifts in order to compensate for the retinal focus shift due to the eye movement. In this way MSTd could map out the relationship between the expansion focus and heading with relatively few neurons, each adjusting its focus preference according to the velocity of the eye.

This pursuit compensation is achieved by a non-uniform gain and distortion applied to different locations in the receptive field. Andersen acknowledges that Perrone & Stone's (1994) and Warren's (1995) models are similar, but notes that they require more neurons, since they maintain separate heading maps for different combinations of eye direction and speed (rather than a single map adjusted by the eye movement signal). Andersen goes so far as to suggest that MSTd may compensate spatially for the consequences of eye movements for all patterns of motion.
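The decomposition described above can be sketched numerically. This is a first-order toy model with invented numbers, ignoring depth and the full rotational flow geometry: observer translation produces expansion about the heading point, pursuit adds an approximately uniform linear field, and subtracting an efference-copy estimate of the pursuit component restores the true heading:

```python
import numpy as np

# Grid of retinal sample points on a 2-D image plane (degrees).
xs, ys = np.meshgrid(np.linspace(-20, 20, 9), np.linspace(-20, 20, 9))
points = np.stack([xs.ravel(), ys.ravel()], axis=1)

heading = np.array([5.0, 0.0])        # true direction of heading (deg)
pursuit = np.array([-1.0, 0.0])       # uniform flow added by eye pursuit

expansion = 0.1 * (points - heading)  # radial expansion about the heading
flow = expansion + pursuit            # the retinal flow actually observed

# The retinal focus of expansion (zero-flow point) is displaced by pursuit...
retinal_focus = points[np.argmin(np.linalg.norm(flow, axis=1))]

# ...but subtracting an efference-copy estimate of the pursuit component
# restores the true heading, i.e. the compensation ascribed to MSTd.
recovered = points[np.argmin(np.linalg.norm(flow - pursuit, axis=1))]
print(retinal_focus, recovered)       # displaced focus vs. true heading
```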

Andersen applies the idea of gain fields to show how eye-centred, head-centred, body-centred and world-centred representations can be built up, conceivably even on top of one another, in the parietal cortex. This requires integrating retinal signals, eye movements, proprioceptive input (especially from the neck), auditory information (interaural time and intensity differences, and spectral cues from both ears), body motor outputs and vestibular input.
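To a first approximation, these frames nest by successive additions of postural signals (a deliberate simplification that treats each transformation as a vector shift and ignores rotations and the true 3-D geometry):

```latex
\begin{aligned}
\mathbf{x}_{\text{head}}  &= \mathbf{x}_{\text{retina}} + \mathbf{e} && \text{(add eye-in-head position)}\\
\mathbf{x}_{\text{body}}  &= \mathbf{x}_{\text{head}}   + \mathbf{n} && \text{(add head-on-trunk position, from neck proprioception)}\\
\mathbf{x}_{\text{world}} &= \mathbf{x}_{\text{body}}   + \mathbf{p} && \text{(add body-in-world position, from vestibular and motor signals)}
\end{aligned}
```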

Furthermore, he claims that neural network models can illustrate methods employing gain fields to transform between coordinate frames. For example, Zipser & Andersen (1988) showed that when 'retinal position signals are converted to a map of the visual field in head-centred coordinates, the hidden units that perform this transformation develop gain fields very similar to those demonstrated in the posterior parietal cortex', and that the activities found for posterior parietal neurons could be the basis of a distributed representation of head-centred space. In Xing et al's (1995) model, which takes in head-centred auditory signals and eye position and retinal position signals as input, and whose output codes the metrics of a planned movement in motor coordinates, the middle layers develop overlapping receptive fields for auditory and visual stimuli and eye position gain fields. It is interesting that the visual signals also develop gain fields, since both the retinally based stimuli and the motor error signals are always aligned when training the network and, in principle, do not need to use eye position information. However, the auditory and visual signals share the same circuitry and distributed representation, which results in gain fields for the visual signals.
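The flavour of the Zipser & Andersen result can be reproduced with a very small network. The sketch below is emphatically not their architecture or training regime (their model used back-propagation on map-like input and output layers); it is a minimal NumPy stand-in trained on the same kind of mapping, after which the hidden units can be probed for eye-position modulation:

```python
import numpy as np

# Minimal stand-in for the Zipser & Andersen (1988) simulation: a small
# network learning (retinal map + eye position) -> head-centred position.
rng = np.random.default_rng(1)
centres = np.linspace(-40, 40, 17)  # Gaussian-bump retinal input units
sigma = 10.0

def encode(x_ret, eye):
    bumps = np.exp(-(x_ret - centres) ** 2 / (2 * sigma ** 2))
    return np.concatenate([bumps, [eye / 40.0]])  # append eye-position input

n_in, n_hid = len(centres) + 1, 8
W1 = rng.normal(0.0, 0.3, (n_hid, n_in))
b1 = np.zeros(n_hid)
w2 = rng.normal(0.0, 0.3, n_hid)

def forward(inp):
    h = np.tanh(W1 @ inp + b1)
    return h, w2 @ h  # hidden activities and scalar output

# Plain stochastic gradient descent on squared error.
for _ in range(20000):
    x_ret, eye = rng.uniform(-30, 30), rng.uniform(-20, 20)
    inp = encode(x_ret, eye)
    target = (x_ret + eye) / 60.0  # head-centred position, rescaled
    h, out = forward(inp)
    err = out - target
    grad_h = err * w2 * (1.0 - h ** 2)  # backprop through tanh
    w2 -= 0.05 * err * h
    W1 -= 0.05 * np.outer(grad_h, inp)
    b1 -= 0.05 * grad_h

# Probe one hidden unit with a fixed retinal stimulus at several eye
# positions: its response scales with eye position, the signature that
# Zipser & Andersen compared with recorded parietal gain fields.
for eye in (-20.0, 0.0, 20.0):
    h, _ = forward(encode(0.0, eye))
    print(f"eye={eye:+.0f}  hidden unit 0 response={h[0]:+.3f}")
```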

More controversially, Andersen claims that the PPC also contains circuitry that appears to be important for shifting attention, stimulus selection and movement planning. Patients with lesions to the PPC have difficulty shifting their focus of attention (Posner et al, 1984). It now seems that the visual responsiveness of parietal neurons is actually reduced at the focus of attention (Robinson et al, 1995), while locations away from the focus of attention are more responsive, apparently signalling novel events for the shifting of attention.

Gnadt & Andersen (1988) have shown that activity in cells primarily in LIP (coding in oculomotor coordinates) precedes saccades. This activity is also memory-related, e.g. persisting while a monkey remembers the location of a briefly-flashed stimulus before making, after a delay, a saccade to the remembered location. Glimcher & Platt required an animal to attend to a distractor target, which was extinguished as a cue to saccade to the selected target, thus separating the focus of attention from the selected movement. For many of the cells, the activity reflected the movement plan and not the attended location, although the activity of some cells was influenced by the attended location. Andersen thinks that these and other studies suggest that a component of LIP activity is related to movements that the animal intends to make.

Mazzoni et al (1996) used a delayed double-saccade experiment to try to distinguish whether the memory activity was primarily related to intentions to make eye movements or to a sensory memory of the location of the target. They found both types of cell, with the majority of overall activity being related to the next intended saccade and not to the remembered stimulus location. This activity did not necessarily lead to execution of the movement, since the animals could be asked to change their planned eye movements during the delay period in a memory saccade task, and the intended movement activity in LIP would change correspondingly (Bracewell et al, 1996). If it could be shown that the activity is related to the type of movement being planned, it would be a strong indication that the activity is intention-related. Bushnell et al (1981) recorded from PPC neurons while the animal programmed an eye or reaching movement to a retinotopically identical stimulus. They claimed that the activity of the cells did not differentiate between these two types of movement, indicating that the PPC is concerned with sensory location and attention, and not with planning movements. However, when Andersen et al repeated the experiment, they found that two-thirds of cells in the PPC were selective during the memory period for whether the target required an arm or an eye movement. Andersen considers Duhamel et al (1992) (similar in design to Gnadt & Andersen, 1988) and Kalaska & Crammond (1995) to be studies whose results could be explained by his theory that the memory-related activity in the PPC signals the animal's plan to make a movement. Thus, when stimulus-related activity comes into the parietal cortex, it can sometimes invoke more than one potential plan, e.g. both eye and limb movements, even if the limb movement is not executed.

 

It seems clear that the parietal cortex plays a variety of roles, relating sensory and motor functions. The majority of the debate centres around:

1. The nature and variety of the representations and transformations of coordinate reference frames encoded in the parietal cortex. Graziano and Gross (1998) argue that there is no single spatial coordinate system, but that the PPC carries the 'raw data' necessary for other brain areas to construct spatial coordinate systems, and that the highest levels of spatial processing lie deep within the motor system. They seem to suggest that the frontal lobes have more of a spatial role to play than the PPC.

2. Whether cells in the parietal cortex signal intention and the type of movement to be performed (as opposed, perhaps, to holding a sensory memory of the location of a target or, less likely, acting as precursors for the actual execution of movement), as Andersen claims. In contrast, Bushnell et al (1981) claimed that the activity of PPC cells recorded while the animal programmed an eye or reaching movement to a retinotopically identical stimulus did not differentiate between these two types of movement, indicating that the PPC is concerned with sensory location and attention, and not with planning movements.

However, all of the evidence supports a claim for a dichotomy of visual processing. Burr et al divide visual processing into one stream for conscious perception (more plastic and subject to spatial distortion) and another for the control of action. This is similar to Goodale & Milner's (1992) distinction, but it is probably more helpful to think in terms of visuoperceptual and visuomotor functions, rather than phenomenology.